Optimizing a Near-duplicate Document Detection System with SIMD Technologies
Authors
Abstract
Although considerable effort has been devoted to duplicate document detection (DDD) and its applications, there has been very limited study of optimizing its time-consuming functions. An experimental analysis conducted on a million grant-proposal documents from nsfc.gov.cn shows that even with clustering and sampling methods, DDD remains quite slow. Profiling our system with the Intel VTune Performance Analyzer, we find that shingle comparison is the most time-consuming part, accounting for 58% of CPU usage. Based on an analysis of the whole non-parallel algorithm and the data statistics, we propose and implement an optimized shingle-comparison algorithm using Intel SIMD technology and GPUs. Experiments on Intel CPUs demonstrate performance gains of 11.6% to 38.5% with different SIMD instruction sets (SSE/SSE2/SSE4.2) and parameter settings. Furthermore, our GPU implementation achieves a 170% performance gain. Higher performance could be obtained by combining these two SIMD technologies.
Similar references
Identifying and Indexing Near-Duplicate Images Using Optimizing Technique in Web Search
Today's World Wide Web is growing drastically, and duplicates occur in many fields, notably duplicate images uploaded to the internet, such as food-product images, document images, medical images, images from the textile field, etc. It therefore becomes very important to identify those duplicate images. Near duplicates can be similar copies or differ a little in their visual content. Duplicate images introduce many p...
Anomaly Detection Using SVM as Classifier and Decision Tree for Optimizing Feature Vectors
Abstract: With the advancement and development of computer network technologies, the way for intruders has become smoother; therefore, to detect threats and attacks, the importance of intrusion detection systems (IDS) as one of the key elements of security is increasing. One of the challenges of intrusion detection systems is managing the large amount of network traffic features. Removing un...
Duplicate Web Pages Detection with the Support of 2D Table Approach
Duplicate and near-duplicate web pages hamper the operation of search engines. As a consequence of duplicates and near duplicates, a common issue for search engines is the growth of indexed storage pages. This high storage demand slows down processing, which in turn increases the serving cost. Finally, duplication also arises while gathering the required data from the var...
Near Duplicate Document Detection for Large Information Flows
Near-duplicate documents and their detection are studied to identify information items that convey the same (or very similar) content, possibly surrounded by diverse sets of side information such as metadata, advertisements, timestamps, web presentation and navigation supports, and so on. Identification of near-duplicate information allows the implementation of selection policies aiming to optimize an i...
A Near-duplicate Detection Algorithm to Facilitate Document Clustering
Web mining faces huge problems due to duplicate and near-duplicate web pages. Detecting near duplicates is very difficult in large collections of data such as the internet. The presence of these web pages plays an important role in performance degradation while integrating data from heterogeneous sources. These pages either increase the index storage space or increase the serving costs. Detecting t...